Results 1 - 20 of 92
1.
IEEE J Biomed Health Inform ; 24(10): 2798-2805, 2020 10.
Article in English | MEDLINE | ID: covidwho-2282971

ABSTRACT

Chest computed tomography (CT) has become an effective tool to assist in the diagnosis of coronavirus disease 2019 (COVID-19). Due to the worldwide outbreak of COVID-19, using computer-aided diagnosis techniques for COVID-19 classification based on CT images could largely alleviate the burden on clinicians. In this paper, we propose an Adaptive Feature Selection guided Deep Forest (AFS-DF) for COVID-19 classification based on chest CT images. Specifically, we first extract location-specific features from CT images. Then, in order to capture a high-level representation of these features with relatively small-scale data, we leverage a deep forest model to learn a high-level representation of the features. Moreover, we propose a feature selection method based on the trained deep forest model to reduce feature redundancy, where the feature selection can be adaptively incorporated into the COVID-19 classification model. We evaluated the proposed AFS-DF on a COVID-19 dataset with 1495 COVID-19 patients and 1027 patients with community-acquired pneumonia (CAP). The accuracy (ACC), sensitivity (SEN), specificity (SPE), AUC, precision and F1-score achieved by our method are 91.79%, 93.05%, 89.95%, 96.35%, 93.10% and 93.07%, respectively. Experimental results on the COVID-19 dataset suggest that the proposed AFS-DF achieves superior performance in COVID-19 vs. CAP classification compared with four widely used machine learning methods.
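The metrics reported above (ACC, SEN, SPE, precision, F1) follow directly from the binary confusion matrix of COVID-19 vs. CAP predictions. A minimal NumPy sketch of those standard definitions (illustrative only, not the authors' code):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Compute ACC, SEN, SPE, precision and F1 from binary labels,
    where 1 = COVID-19 and 0 = CAP."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / len(y_true)
    sen = tp / (tp + fn)           # sensitivity (recall)
    spe = tn / (tn + fp)           # specificity
    prec = tp / (tp + fp)          # precision
    f1 = 2 * prec * sen / (prec + sen)  # harmonic mean of precision and recall
    return acc, sen, spe, prec, f1
```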


Subject(s)
Betacoronavirus , Clinical Laboratory Techniques/statistics & numerical data , Coronavirus Infections/diagnostic imaging , Coronavirus Infections/diagnosis , Pneumonia, Viral/diagnostic imaging , Pneumonia, Viral/diagnosis , Tomography, X-Ray Computed/statistics & numerical data , COVID-19 , COVID-19 Testing , Computational Biology , Coronavirus Infections/classification , Databases, Factual/statistics & numerical data , Deep Learning , Humans , Neural Networks, Computer , Pandemics/classification , Pneumonia, Viral/classification , Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data , Radiography, Thoracic/statistics & numerical data , SARS-CoV-2
2.
Radiology ; 305(2): 454-465, 2022 11.
Article in English | MEDLINE | ID: covidwho-1950321

ABSTRACT

Background Developing deep learning models for radiology requires large data sets and substantial computational resources. Data set size limitations can be further exacerbated by distribution shifts, such as rapid changes in patient populations and standard of care during the COVID-19 pandemic. A common partial mitigation is transfer learning by pretraining a "generic network" on a large nonmedical data set and then fine-tuning on a task-specific radiology data set. Purpose To reduce data set size requirements for chest radiography deep learning models by using an advanced machine learning approach (supervised contrastive [SupCon] learning) to generate chest radiography networks. Materials and Methods SupCon helped generate chest radiography networks from 821 544 chest radiographs from India and the United States. The chest radiography networks were used as a starting point for further machine learning model development for 10 prediction tasks (eg, airspace opacity, fracture, tuberculosis, and COVID-19 outcomes) by using five data sets comprising 684 955 chest radiographs from India, the United States, and China. Three model development setups were tested (linear classifier, nonlinear classifier, and fine-tuning the full network) with different data set sizes from eight to 85. Results Across a majority of tasks, compared with transfer learning from a nonmedical data set, SupCon reduced label requirements up to 688-fold and improved the area under the receiver operating characteristic curve (AUC) at matching data set sizes. At the extreme low-data regimen, training small nonlinear models by using only 45 chest radiographs yielded an AUC of 0.95 (noninferior to radiologist performance) in classifying microbiology-confirmed tuberculosis in external validation. At a more moderate data regimen, training small nonlinear models by using only 528 chest radiographs yielded an AUC of 0.75 in predicting severe COVID-19 outcomes. 
Conclusion Supervised contrastive learning enabled performance comparable to state-of-the-art deep learning models in multiple clinical tasks by using as few as 45 images and is a promising method for predictive modeling with use of small data sets and for predicting outcomes in shifting patient populations. © RSNA, 2022 Online supplemental material is available for this article.
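The supervised contrastive (SupCon) objective used for pretraining pulls together embeddings that share a label and pushes apart all others. A minimal NumPy sketch of the standard SupCon loss (Khosla et al., 2020), offered as an assumption about the loss form rather than the article's actual training code:

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss on embeddings z of shape (n, d):
    for each anchor, same-label embeddings are positives and everything
    else is a negative; tau is the temperature."""
    z = np.asarray(z, float)
    labels = np.asarray(labels)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    n = len(labels)
    logits = (z @ z.T) / tau
    eye = np.eye(n, dtype=bool)
    logits = np.where(eye, -np.inf, logits)            # exclude self-pairs
    m = logits.max(axis=1, keepdims=True)              # stable log-sum-exp
    log_den = m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    log_prob = logits - log_den
    pos = (labels[:, None] == labels[None, :]) & ~eye  # same-label pairs
    n_pos = pos.sum(axis=1)
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(n_pos, 1)
    return per_anchor[n_pos > 0].mean()
```

Well-separated same-label clusters yield a near-zero loss; mixed clusters are penalized.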


Subject(s)
COVID-19 , Deep Learning , Humans , Radiography, Thoracic/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Pandemics , COVID-19/diagnostic imaging , Retrospective Studies , Radiography , Machine Learning
3.
Sci Rep ; 12(1): 8922, 2022 05 26.
Article in English | MEDLINE | ID: covidwho-1864771

ABSTRACT

The outbreak of COVID-19 has, since its appearance, affected about 200 countries and endangered millions of lives. COVID-19 is an extremely contagious disease, and it can quickly incapacitate healthcare systems if infected cases are not handled in time. Several convolutional neural network (CNN)-based techniques have been developed to diagnose COVID-19. These techniques require large, labelled datasets to train the algorithm fully, but few such datasets are available. To mitigate this problem and facilitate the diagnosis of COVID-19, we developed a transformer-based approach with a self-attention mechanism that uses CT slices. The transformer architecture can exploit ample unlabelled datasets through pre-training. The paper aims to compare the performance of the self-attention transformer-based approach with CNN and ensemble classifiers for the diagnosis of COVID-19, using the binary Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) CT scan dataset and the multi-class Hybrid-learning for UnbiaSed predicTion of COVID-19 (HUST-19) CT scan dataset. To perform this comparison, we tested deep learning-based classifiers and ensemble classifiers against the proposed approach on CT scan images. The proposed approach is more effective in the detection of COVID-19, with an accuracy of 99.7% on the multi-class HUST-19 dataset and 98% on the binary-class SARS-CoV-2 dataset. Cross-corpus evaluation achieves an accuracy of 93% by training the model on the HUST-19 dataset and testing on a Brazilian COVID-19 dataset.
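The self-attention mechanism the approach relies on is scaled dot-product attention, softmax(QKᵀ/√d)V. A single-head NumPy sketch of that core operation (illustrative only; the paper's transformer uses learned multi-head attention over CT-slice features):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over token matrix X:
    Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # rows sum to 1
    return weights @ V                             # (n_tokens, d_v)
```

With zero query/key projections, attention is uniform and every output row is the mean of the value rows, which makes the behavior easy to check.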


Subject(s)
COVID-19 , Algorithms , COVID-19/diagnosis , Humans , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , SARS-CoV-2
4.
PLoS One ; 17(3): e0263916, 2022.
Article in English | MEDLINE | ID: covidwho-1742004

ABSTRACT

OBJECTIVES: Ground-glass opacity (GGO), a hazy, gray-appearing density on computed tomography (CT) of the lungs, is one of the hallmark features of SARS-CoV-2 infection in COVID-19 patients. This AI-driven study focuses on the segmentation, morphology, and distribution patterns of GGOs. METHOD: We use an AI-driven unsupervised machine learning approach called PointNet++ to detect and quantify GGOs in CT scans of COVID-19 patients and to assess the severity of the disease. We conducted our study on "MosMedData", which contains CT lung scans of 1110 patients with or without COVID-19 infection. We quantify the morphologies of GGOs using Minkowski tensors and compute the abnormality score of individual regions of the segmented lung and GGOs. RESULTS: PointNet++ detects GGOs with the highest evaluation accuracy (98%), average class accuracy (95%), and intersection over union (92%) using only a fraction of the 3D data. On average, the shapes of GGOs in the COVID-19 datasets deviate from sphericity by 15%, and anisotropies in GGOs are dominated by dipole and hexapole components. These anisotropies may help to quantitatively delineate GGOs of COVID-19 from those of other lung diseases. CONCLUSION: PointNet++ and the Minkowski tensor based morphological approach, together with abnormality analysis, will provide radiologists and clinicians with a valuable set of tools when interpreting CT lung scans of COVID-19 patients. Implementation would be particularly useful in countries severely devastated by COVID-19, such as India, where the number of cases has outstripped available resources, creating delays or even breakdowns in patient care. This AI-driven approach synthesizes both the unique GGO distribution pattern and the severity of the disease to allow for more efficient diagnosis, triaging, and conservation of limited resources.


Subject(s)
COVID-19/diagnostic imaging , Lung/pathology , Radiographic Image Interpretation, Computer-Assisted/methods , Artificial Intelligence , COVID-19/pathology , Female , Humans , India , Lung/diagnostic imaging , Male , Patient Acuity , Retrospective Studies , Tomography, X-Ray Computed/methods , Unsupervised Machine Learning
5.
BMC Pulm Med ; 22(1): 1, 2022 Jan 03.
Article in English | MEDLINE | ID: covidwho-1608729

ABSTRACT

BACKGROUND: Quantitative evaluation of radiographic images has been developed and suggested for the diagnosis of coronavirus disease 2019 (COVID-19). However, there are limited opportunities to use these image-based diagnostic indices in clinical practice. Our aim in this study was to evaluate the utility of a novel visually-based classification of pulmonary findings from computed tomography (CT) images of COVID-19 patients, with the following three patterns defined: peripheral, multifocal, and diffuse findings of pneumonia. We also evaluated the prognostic value of this classification for predicting the severity of COVID-19. METHODS: This was a single-center retrospective cohort study of patients hospitalized with COVID-19 between January 1st and September 30th, 2020, who presented with suspicious findings on CT lung images at admission (n = 69). We compared the association between the three predefined patterns (peripheral, multifocal, and diffuse) and admission to the intensive care unit, tracheal intubation, and death. We tested quantitative CT analysis as an outcome predictor for COVID-19. Quantitative CT analysis was performed using a semi-automated method (Thoracic Volume Computer-Assisted Reading software, GE Healthcare, United States). Lungs were divided by Hounsfield unit (HU) intervals. Compromised lung volume (%CL) was the sum of the poorly aerated and non-aerated volumes (-500 to 100 HU). We collected patient clinical data, including demographic and clinical variables at the time of admission. RESULTS: Patients with a diffuse pattern were intubated more frequently and for a longer duration than patients with a peripheral or multifocal pattern.
The following clinical variables were significantly different between the diffuse pattern group and the peripheral and multifocal groups: body temperature (p = 0.04), lymphocyte count (p = 0.01), neutrophil count (p = 0.02), C-reactive protein (p < 0.01), lactate dehydrogenase (p < 0.01), Krebs von den Lungen-6 antigen (p < 0.01), D-dimer (p < 0.01), and steroid (p = 0.01) and favipiravir (p = 0.03) administration. CONCLUSIONS: Our simple visual assessment of CT images can predict the severity of illness, a resulting decrease in respiratory function, and the need for supplemental respiratory ventilation among patients with COVID-19.
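The %CL index described above is a straightforward Hounsfield-unit threshold over the segmented lung volume. A minimal NumPy sketch of that computation (the commercial software's exact pipeline is not reproduced here):

```python
import numpy as np

def compromised_lung_percent(hu, lung_mask, lo=-500, hi=100):
    """Percentage of compromised lung (%CL): the fraction of lung voxels
    whose attenuation falls in the poorly/non-aerated range [lo, hi] HU."""
    lung = hu[lung_mask]                       # HU values inside the lung mask
    compromised = (lung >= lo) & (lung <= hi)
    return 100.0 * compromised.sum() / lung.size
```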


Subject(s)
COVID-19/classification , COVID-19/diagnostic imaging , Tomography, X-Ray Computed , Adult , Aged , Amides/therapeutic use , Antiviral Agents/therapeutic use , Body Temperature , C-Reactive Protein/metabolism , COVID-19/physiopathology , Female , Fibrin Fibrinogen Degradation Products/metabolism , Humans , L-Lactate Dehydrogenase/blood , Lung/diagnostic imaging , Lymphocyte Count , Male , Middle Aged , Mucin-1/blood , Neutrophils , Predictive Value of Tests , Prognosis , Pyrazines/therapeutic use , Radiographic Image Interpretation, Computer-Assisted , Retrospective Studies , SARS-CoV-2 , Steroids/therapeutic use , COVID-19 Drug Treatment
6.
Sci Rep ; 11(1): 24065, 2021 12 15.
Article in English | MEDLINE | ID: covidwho-1585806

ABSTRACT

COVID-19 is a respiratory disease that causes infection in both the lungs and the upper respiratory tract. The World Health Organization (WHO) has declared it a global pandemic because of its rapid spread across the globe. The most common method of COVID-19 diagnosis is real-time reverse transcription-polymerase chain reaction (RT-PCR), which takes a significant amount of time to produce a result. Computer-based medical image analysis is more beneficial for the diagnosis of such a disease, as it can give better results in less time. Computed Tomography (CT) scans are used to monitor lung diseases including COVID-19. In this work, a hybrid model for COVID-19 detection has been developed, which has two key stages. In the first stage, we fine-tuned the parameters of pre-trained convolutional neural networks (CNNs) to extract features from the COVID-19 affected lungs. As pre-trained CNNs, we used two standard networks, GoogleNet and ResNet18. Then, we proposed a hybrid meta-heuristic feature selection (FS) algorithm, named the Manta Ray Foraging based Golden Ratio Optimizer (MRFGRO), to select the most significant feature subset. The proposed model is implemented on three publicly available datasets, namely the COVID-CT, SARS-COV-2, and MOSMED datasets, and attains state-of-the-art classification accuracies of 99.15%, 99.42%, and 95.57%, respectively. The obtained results confirm that the proposed approach is quite efficient compared with the local texture descriptors used for COVID-19 detection from chest CT-scan images.


Subject(s)
COVID-19/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , COVID-19 Testing/methods , Deep Learning , Heuristics , Humans , Neural Networks, Computer , Tomography, X-Ray Computed
7.
Sci Rep ; 11(1): 23914, 2021 12 13.
Article in English | MEDLINE | ID: covidwho-1569278

ABSTRACT

Chest X-ray (CXR) images have been one of the important diagnostic tools used in COVID-19 diagnosis. Deep learning (DL)-based methods have been used heavily to analyze these images. Compared with other DL-based methods, the recently proposed bag of deep visual words (BoDVW) method has been shown to be a prominent representation of CXR images owing to its better discriminability. However, single-scale BoDVW features are insufficient to capture the detailed semantic information of the infected regions in the lungs, as the resolution of such images varies in real applications. In this paper, we propose new multi-scale bag of deep visual words (MBoDVW) features, which exploit three different scales of the 4th pooling layer's output feature map obtained from the VGG-16 model. For the MBoDVW-based features, we perform a convolution with max pooling operation over the 4th pooling layer using three different kernels: [Formula: see text], [Formula: see text], and [Formula: see text]. We evaluate the proposed features with the Support Vector Machine (SVM) classification algorithm on four public CXR datasets (CD1, CD2, CD3, and CD4) with over 5000 CXR images. Experimental results show that our method produces stable and prominent classification accuracy (84.37%, 88.88%, 90.29%, and 83.65% on CD1, CD2, CD3, and CD4, respectively).
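The multi-scale step amounts to pooling the same VGG-16 feature map at several window sizes and concatenating the results. A NumPy sketch of non-overlapping multi-scale max pooling; since the paper's kernel sizes are elided above, the sizes (1, 2, 3) used here are purely hypothetical:

```python
import numpy as np

def max_pool2d(fmap, k):
    """Non-overlapping k x k max pooling over an (H, W, C) feature map;
    trailing rows/columns that do not fill a window are dropped."""
    H, W, C = fmap.shape
    H2, W2 = H // k, W // k
    x = fmap[:H2 * k, :W2 * k].reshape(H2, k, W2, k, C)
    return x.max(axis=(1, 3))

def multiscale_features(fmap, kernels=(1, 2, 3)):
    """Pool the same feature map at several scales and concatenate the
    flattened results, mimicking the multi-scale step of MBoDVW
    (kernel sizes here are illustrative, not the paper's)."""
    return np.concatenate([max_pool2d(fmap, k).ravel() for k in kernels])
```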


Subject(s)
COVID-19/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Databases, Factual , Deep Learning , Humans , Support Vector Machine
8.
Comput Math Methods Med ; 2021: 7259414, 2021.
Article in English | MEDLINE | ID: covidwho-1533111

ABSTRACT

In this paper, based on an improved convolutional neural network, we carry out an in-depth analysis of CT images of novel coronavirus pneumonia (COVID-19). First, deep neural networks of the U-Net family semantically segment the CT images to obtain a binary image with the pneumonia region as the foreground and the remaining areas as the background, providing a basis for subsequent image-based diagnosis. Second, the object-detection framework Faster R-CNN extracts features from the CT image of the pneumonia lesion, obtains a higher-level abstract representation of the data, determines the lesion location, and outputs its bounding box in the image. Finally, a generative adversarial network is used to diagnose the lesion area of the CT image and obtain a complete image of the pneumonia, achieving CT image-based diagnosis of the lesion and a three-dimensional reconstruction of the complete pneumonia model. This fills a current gap in this area, provides a basis for producing pneumonia models, and improves the accuracy of diagnosis.


Subject(s)
Algorithms , COVID-19/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed/statistics & numerical data , COVID-19/diagnosis , Computational Biology , Databases, Factual , Deep Learning , Diagnosis, Computer-Assisted/statistics & numerical data , Humans , Imaging, Three-Dimensional/statistics & numerical data , Pandemics , Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data , SARS-CoV-2
9.
J Comput Assist Tomogr ; 45(6): 970-978, 2021.
Article in English | MEDLINE | ID: covidwho-1440699

ABSTRACT

OBJECTIVE: To quantitatively evaluate computed tomography (CT) parameters of coronavirus disease 2019 (COVID-19) pneumonia using artificial intelligence (AI)-based software in different clinical severity groups during the disease course. METHODS: From March 11 to April 15, 2020, 51 patients (age, 18-84 years; 28 men) diagnosed and hospitalized with COVID-19 pneumonia, with a total of 116 CT scans, were enrolled in the study. Patients were divided into mild (n = 12), moderate (n = 31), and severe (n = 8) groups based on clinical severity. An AI-based quantitative CT analysis, including lung volume, opacity score, opacity volume, percentage of opacity, and mean lung density, was performed on initial and follow-up CTs obtained at different time points. Receiver operating characteristic analysis was performed to determine the ability of quantitative CT parameters to discriminate severe from nonsevere pneumonia. RESULTS: At baseline, the severe group had a significantly higher opacity score, opacity volume, percentage of opacity, and mean lung density than the moderate group (all P ≤ 0.001). Across consecutive time points, the severe group had a significant decrease in lung volume (P = 0.006) and significant increases in total opacity score (P = 0.003) and percentage of opacity (P = 0.007). A significant increase in total opacity score was also observed in the mild group (P = 0.011). Residual opacities were observed in all groups.
The involvement of more than 4 lobes (sensitivity, 100%; specificity, 65.26%), total opacity score greater than 4 (sensitivity, 100%; specificity, 64.21%), total opacity volume greater than 337.4 mL (sensitivity, 80.95%; specificity, 84.21%), percentage of opacity greater than 11% (sensitivity, 80.95%; specificity, 88.42%), total high opacity volume greater than 10.5 mL (sensitivity, 95.24%; specificity, 66.32%), percentage of high opacity greater than 0.8% (sensitivity, 85.71%; specificity, 80.00%), and mean lung density greater than -705 HU (sensitivity, 57.14%; specificity, 90.53%) were related to severe pneumonia. CONCLUSIONS: AI-based quantitative CT analysis is an objective tool for demonstrating disease severity and can also assist the clinician in follow-up by providing information about the disease course and prognosis according to different clinical severity groups.
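Each reported cutoff defines a "value greater than threshold implies severe" rule whose sensitivity and specificity are computed in the usual way. A minimal NumPy sketch (the example values below are made up for illustration, not the study's data):

```python
import numpy as np

def sens_spec_at_cutoff(values, severe, cutoff):
    """Sensitivity and specificity of the rule 'value > cutoff => severe',
    as used for quantitative CT cutoffs (e.g. percentage of opacity > 11%)."""
    values = np.asarray(values, float)
    severe = np.asarray(severe, dtype=bool)
    pred = values > cutoff
    sens = (pred & severe).sum() / severe.sum()        # true positive rate
    spec = (~pred & ~severe).sum() / (~severe).sum()   # true negative rate
    return sens, spec
```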


Subject(s)
Artificial Intelligence , COVID-19/diagnostic imaging , Lung/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Adolescent , Adult , Aged , Aged, 80 and over , Evaluation Studies as Topic , Female , Humans , Male , Middle Aged , Reproducibility of Results , Retrospective Studies , SARS-CoV-2 , Sensitivity and Specificity , Severity of Illness Index , Time , Young Adult
10.
Sci Rep ; 11(1): 18478, 2021 09 16.
Article in English | MEDLINE | ID: covidwho-1415957

ABSTRACT

With the emergence of the novel coronavirus disease at the end of 2019, several approaches were proposed to help physicians detect the disease, such as using deep learning to recognize lung involvement based on the pattern of pneumonia. These approaches rely on analyzing CT images and exploring COVID-19 pathologies in the lung. Most of the successful methods are based on deep learning, which is the state of the art. Nevertheless, the big drawback of deep approaches is their need for many samples, which is not always available. This work proposes a combined deep architecture that benefits from both of its constituent architectures, DenseNet and CapsNet. To improve the generalization of the deep model, we propose a regularization term with far fewer parameters. Network convergence improved significantly, especially when the amount of training data is small. We also propose a novel cost-sensitive loss function for imbalanced data that makes our model feasible when the number of positive samples is limited. These novelties make our approach more robust and potent in real-world situations with imbalanced data, which are common in hospitals. We analyzed our approach on two publicly available datasets, HUST and COVID-CT, with different protocols. In the first HUST protocol, we followed the original paper's setup and outperformed it. With the second HUST protocol, we show our approach's superiority with respect to imbalanced data. Finally, with three different validations of COVID-CT, we provide evaluations in the presence of a small amount of data, along with a comparison with the state of the art.
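A common form of cost-sensitive loss for imbalanced data is class-weighted cross-entropy, which penalizes errors on the scarce positive class more heavily. A NumPy sketch of that generic idea (the paper's exact loss formulation may differ; the weights here are arbitrary):

```python
import numpy as np

def weighted_bce(p, y, w_pos=5.0, w_neg=1.0):
    """Class-weighted binary cross-entropy: a misclassified positive
    (e.g. COVID-19) sample costs w_pos / w_neg times more than a
    misclassified negative. p = predicted probabilities, y = labels."""
    p = np.clip(p, 1e-7, 1 - 1e-7)  # avoid log(0)
    return -np.mean(w_pos * y * np.log(p) + w_neg * (1 - y) * np.log(1 - p))
```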


Subject(s)
COVID-19/diagnostic imaging , Lung/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Deep Learning , Early Diagnosis , Humans , Neural Networks, Computer , Tomography, X-Ray Computed
11.
Sci Rep ; 11(1): 15523, 2021 09 01.
Article in English | MEDLINE | ID: covidwho-1392879

ABSTRACT

Chest radiography (CXR) is the most widely used thoracic clinical imaging modality and is crucial for guiding the management of cardiothoracic conditions. The detection of specific CXR findings has been the main focus of several artificial intelligence (AI) systems. However, the wide range of possible CXR abnormalities makes it impractical to detect every possible condition by building multiple separate systems, each of which detects one or more pre-specified conditions. In this work, we developed and evaluated an AI system to classify CXRs as normal or abnormal. For training and tuning the system, we used a de-identified dataset of 248,445 patients from a multi-city hospital network in India. To assess generalizability, we evaluated our system using 6 international datasets from India, China, and the United States. Of these datasets, 4 focused on diseases that the AI was not trained to detect: 2 datasets with tuberculosis and 2 datasets with coronavirus disease 2019. Our results suggest that the AI system trained using a large dataset containing a diverse array of CXR abnormalities generalizes to new patient populations and unseen diseases. In a simulated workflow where the AI system prioritized abnormal cases, the turnaround time for abnormal cases was reduced by 7-28%. These results represent an important step towards evaluating whether AI can be safely used to flag cases in a general setting where previously unseen abnormalities exist. Lastly, to facilitate the continued development of AI models for CXR, we release our collected labels for the publicly available dataset.


Subject(s)
COVID-19/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Tuberculosis/diagnostic imaging , Adult , Aged , Algorithms , Case-Control Studies , China , Deep Learning , Female , Humans , India , Male , Middle Aged , Radiography, Thoracic , United States
12.
Sci Rep ; 11(1): 17318, 2021 08 27.
Article in English | MEDLINE | ID: covidwho-1376210

ABSTRACT

Infectious diseases are among the leading causes of mortality across the globe and have cost tremendous numbers of lives, the latest being coronavirus disease (COVID-19), which has become the most recent challenging issue. The extreme nature of this infectious virus and its ability to spread without control have made it mandatory to find an efficient auto-diagnosis system to assist the people who work in contact with patients. As fuzzy logic is considered a powerful technique for modeling vagueness in medical practice, an Adaptive Neuro-Fuzzy Inference System (ANFIS) is proposed in this paper as a key tool for automatic COVID-19 detection from chest X-ray images, based on characteristics derived by texture analysis using the gray level co-occurrence matrix (GLCM) technique. Unlike existing methods, especially deep learning-based approaches, the proposed ANFIS-based method can work on small datasets. The results showed promising accuracy; compared with other state-of-the-art techniques, the proposed method gives the same performance as deep learning with complex architectures using many backbones.
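A GLCM tallies how often pairs of gray levels co-occur at a fixed pixel offset; Haralick texture features such as contrast and homogeneity are then moments of that matrix. A minimal NumPy sketch for one horizontal offset (the paper's offsets, quantization levels, and full feature set are not specified here):

```python
import numpy as np

def glcm_features(img, levels=8, d=(0, 1)):
    """Build a gray-level co-occurrence matrix for one offset d = (di, dj)
    and return two classic Haralick features: contrast and homogeneity.
    img must contain integer gray levels in [0, levels)."""
    g = np.zeros((levels, levels))
    di, dj = d
    H, W = img.shape
    for i in range(H - di):
        for j in range(W - dj):
            g[img[i, j], img[i + di, j + dj]] += 1  # count co-occurrence
    g /= g.sum()                                    # normalize to probabilities
    idx = np.arange(levels)
    diff = idx[:, None] - idx[None, :]              # gray-level differences
    contrast = (g * diff ** 2).sum()
    homogeneity = (g / (1 + np.abs(diff))).sum()
    return contrast, homogeneity
```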


Subject(s)
COVID-19/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Deep Learning , Early Diagnosis , Fuzzy Logic , Humans , Radiography
13.
Comput Med Imaging Graph ; 92: 101957, 2021 09.
Article in English | MEDLINE | ID: covidwho-1330724

ABSTRACT

Lung cancer is one of the most common and deadly malignant cancers. Accurate lung tumor segmentation from CT is therefore very important for correct diagnosis and treatment planning. Automated lung tumor segmentation is challenging due to the high variance in the appearance and shape of the target tumors. To overcome this challenge, we present an effective 3D U-Net equipped with a ResNet architecture and a two-pathway deep supervision mechanism to increase the network's capacity for learning richer representations of lung tumors from global and local perspectives. We conducted extensive experiments on two real medical datasets: a lung CT dataset from Liaoning Cancer Hospital in China with 220 cases and the public TCIA dataset with 422 cases. Our experiments demonstrate that our model achieves an average dice score of 0.675, sensitivity of 0.731 and F1-score of 0.682 on the dataset from Liaoning Cancer Hospital, and an average dice score of 0.691, sensitivity of 0.746 and F1-score of 0.724 on the TCIA dataset. The results demonstrate that the proposed 3D MSDS-UNet outperforms state-of-the-art segmentation models in segmenting tumors of all scales, especially small tumors. Moreover, we evaluated MSDS-UNet on another challenging volumetric medical image segmentation task, COVID-19 lung infection segmentation, which shows consistent improvement in segmentation performance.
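The dice score reported above is the standard overlap measure between predicted and ground-truth masks, 2|A∩B| / (|A| + |B|). A minimal NumPy sketch:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|); eps guards against empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```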


Subject(s)
COVID-19/diagnostic imaging , Imaging, Three-Dimensional , Lung Neoplasms/diagnostic imaging , Pneumonia, Viral/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Supervised Machine Learning , Tomography, X-Ray Computed , China , Humans , Pneumonia, Viral/virology , SARS-CoV-2
14.
IEEE J Biomed Health Inform ; 25(7): 2363-2373, 2021 07.
Article in English | MEDLINE | ID: covidwho-1328981

ABSTRACT

COVID-19 pneumonia is a disease that causes an existential health crisis in many people by directly affecting and damaging lung cells. The segmentation of infected areas from computed tomography (CT) images can be used to assist and provide useful information for COVID-19 diagnosis. Although several deep learning-based segmentation methods have been proposed for COVID-19 segmentation and have achieved state-of-the-art results, the segmentation accuracy is still not high enough (approximately 85%) due to the variations of COVID-19 infected areas (such as shape and size variations) and the similarities between COVID-19 and non-COVID-infected areas. To improve the segmentation accuracy of COVID-19 infected areas, we propose an interactive attention refinement network (Attention RefNet). The interactive attention refinement network can be connected with any segmentation network and trained with the segmentation network in an end-to-end fashion. We propose a skip connection attention module to improve the important features in both segmentation and refinement networks and a seed point module to enhance the important seeds (positions) for interactive refinement. The effectiveness of the proposed method was demonstrated on public datasets (COVID-19CTSeg and MICCAI) and our private multicenter dataset. The segmentation accuracy was improved to more than 90%. We also confirmed the generalizability of the proposed network on our multicenter dataset. The proposed method can still achieve high segmentation accuracy.


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Databases, Factual , Humans , Lung/diagnostic imaging
15.
IEEE J Biomed Health Inform ; 25(7): 2376-2387, 2021 07.
Article in English | MEDLINE | ID: covidwho-1328979

ABSTRACT

Researchers seek help from deep learning methods to alleviate the enormous burden of reading radiological images by clinicians during the COVID-19 pandemic. However, clinicians are often reluctant to trust deep models due to their black-box characteristics. To automatically differentiate COVID-19 and community-acquired pneumonia from healthy lungs in radiographic imaging, we propose an explainable attention-transfer classification model based on the knowledge distillation network structure. The attention transfer direction always goes from the teacher network to the student network. Firstly, the teacher network extracts global features and concentrates on the infection regions to generate attention maps. It uses a deformable attention module to strengthen the response of infection regions and to suppress noise in irrelevant regions with an expanded reception field. Secondly, an image fusion module combines attention knowledge transferred from teacher network to student network with the essential information in original input. While the teacher network focuses on global features, the student branch focuses on irregularly shaped lesion regions to learn discriminative features. Lastly, we conduct extensive experiments on public chest X-ray and CT datasets to demonstrate the explainability of the proposed architecture in diagnosing COVID-19.


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Algorithms , Humans , Lung/diagnostic imaging , SARS-CoV-2
16.
J Med Imaging Radiat Oncol ; 65(5): 498-517, 2021 Aug.
Article in English | MEDLINE | ID: covidwho-1314002

ABSTRACT

Deep learning (DL) has shown rapid advancement and considerable promise when applied to the automatic detection of diseases using CXRs. This is important given the widespread use of CXRs across the world in diagnosing significant pathologies, and the lack of trained radiologists to report them. This review article introduces the basic concepts of DL as applied to CXR image analysis including basic deep neural network (DNN) structure, the use of transfer learning and the application of data augmentation. It then reviews the current literature on how DNN models have been applied to the detection of common CXR abnormalities (e.g. lung nodules, pneumonia, tuberculosis and pneumothorax) over the last few years. This includes DL approaches employed for the classification of multiple different diseases (multi-class classification). Performance of different techniques and models and their comparison with human observers are presented. Some of the challenges facing DNN models, including their future implementation and relationships to radiologists, are also discussed.


Subject(s)
Deep Learning , Humans , Radiographic Image Interpretation, Computer-Assisted , Radiography, Thoracic , X-Rays
17.
J Healthc Eng ; 2021: 5513679, 2021.
Article in English | MEDLINE | ID: covidwho-1286755

ABSTRACT

The world is experiencing an unprecedented crisis due to the coronavirus disease (COVID-19) outbreak, which has affected nearly 216 countries and territories across the globe. Since the outbreak, there has been growing interest in computational model-based diagnostic technologies to support the screening and diagnosis of COVID-19 cases using medical imaging such as chest X-ray (CXR) scans. Initial studies found that patients infected with COVID-19 show abnormalities in their CXR images that represent specific radiological patterns, yet detecting these patterns is challenging and time-consuming even for skilled radiologists. In this study, we propose a novel convolutional neural network (CNN)-based deep learning fusion framework using the transfer learning concept, in which parameters (weights) from different models are combined into a single model to extract features from images, which are then fed to a custom classifier for prediction. We use gradient-weighted class activation mapping (Grad-CAM) to visualize the infected areas of CXR images. Furthermore, we provide feature representations through visualization to gain a deeper understanding of the class separability of the studied models with respect to COVID-19 detection. Cross-validation studies assess the performance of the proposed models on open-access datasets containing healthy, COVID-19, and other pneumonia-infected CXR images. Evaluation results show that the best-performing fusion model attains a classification accuracy of 95.49% with high sensitivity and specificity.
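The Grad-CAM step itself has a standard definition: channel-wise gradient averages weight the feature maps, and a ReLU keeps only regions that contribute positively to the target class. A framework-free numpy sketch (array shapes are hypothetical; the study's actual models are not reproduced here):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map from a conv layer's activations and the
    gradients of the target class score w.r.t. those activations.
    Both inputs are (C, H, W) arrays."""
    weights = gradients.mean(axis=(1, 2))               # (C,) channel importance
    cam = np.tensordot(weights, feature_maps, axes=1)   # weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                            # keep positive influence
    if cam.max() > 0:
        cam /= cam.max()                                # scale to [0, 1]
    return cam

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 7, 7))    # hypothetical conv activations
grads = rng.standard_normal((8, 7, 7))    # hypothetical class-score gradients
cam = grad_cam(feats, grads)
```

The resulting low-resolution map is upsampled to the input size and overlaid on the CXR to highlight suspected infected areas.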


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/methods , Humans , Lung/diagnostic imaging , SARS-CoV-2 , Sensitivity and Specificity
18.
J Healthc Eng ; 2021: 6658058, 2021.
Article in English | MEDLINE | ID: covidwho-1277017

ABSTRACT

The COVID-19 pandemic has had a significant negative effect on people's health as well as on the world's economy. Polymerase chain reaction (PCR) is one of the main tests used to detect COVID-19 infection, but it is expensive, time-consuming, and lacks sufficient accuracy. In recent years, convolutional neural networks have attracted many researchers in the machine learning field because of their high diagnostic accuracy, especially in medical image recognition. Many architectures, such as Inception, ResNet, DenseNet, and VGG16, have been proposed and achieve excellent performance at low computational cost. Moreover, to accelerate the training of these traditional architectures, residual connections have been combined with the inception architecture, yielding hybrid architectures such as Inception-ResNetV2. This paper proposes an enhanced Inception-ResNetV2 deep learning model that can diagnose chest X-ray (CXR) scans with high accuracy. In addition, the Grad-CAM algorithm is used to enhance the visualization of the infected regions of the lungs in CXR images. Compared with state-of-the-art methods, the proposed model proves superior in terms of accuracy, recall, precision, and F1-measure.
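The residual connection the abstract refers to is simply an identity shortcut around a transform; a toy numpy sketch of the idea (the `transform` here is a stand-in for an inception module, not the paper's network):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def residual_block(x, transform):
    """Identity shortcut around an arbitrary transform: the output is
    relu(x + transform(x)), so gradients can flow through the shortcut
    even when the transform is hard to train."""
    return relu(x + transform(x))

x = np.array([1.0, -2.0, 3.0])
y = residual_block(x, lambda v: 0.1 * v)   # small residual update
```

Because the shortcut passes the input through unchanged, stacking many such blocks (as Inception-ResNetV2 does) trains faster than equally deep plain networks.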


Subject(s)
COVID-19/diagnosis , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/methods , SARS-CoV-2 , Algorithms , Diagnosis, Differential , Humans , Lung/diagnostic imaging , Pneumonia, Viral/diagnostic imaging
19.
PLoS One ; 16(6): e0252440, 2021.
Article in English | MEDLINE | ID: covidwho-1259242

ABSTRACT

Chest X-rays (CXRs) can help triage Coronavirus disease (COVID-19) patients in resource-constrained environments, and a computer-aided detection system (CAD) that can identify pneumonia on CXR may help triage patients in environments where expert radiologists are not available. However, the performance of existing CAD systems in identifying COVID-19 and associated pneumonia on CXRs has been scarcely investigated. In this study, CXRs of patients with and without COVID-19 confirmed by reverse transcriptase polymerase chain reaction (RT-PCR) were retrospectively collected from four institutions and one institution, respectively, and a commercialized, regulatory-approved CAD that can identify various abnormalities, including pneumonia, was used to analyze each CXR. Performance of the CAD was evaluated using areas under the receiver operating characteristic curves (AUCs), with the RT-PCR results and the presence of pneumonia findings on chest CTs obtained within 24 hours of the CXR as reference standards. For comparison, 5 thoracic radiologists and 5 non-radiologist physicians independently interpreted the CXRs, then re-interpreted them with the corresponding CAD results. The performance of the CAD (AUCs of 0.714 and 0.790 against RT-PCR and chest CT, respectively, hereinafter) was similar to that of the thoracic radiologists (AUCs, 0.701 and 0.784) and higher than that of the non-radiologist physicians (AUCs, 0.584 and 0.650). Non-radiologist physicians showed significantly improved performance when assisted by the CAD (AUCs, 0.584 to 0.664 and 0.650 to 0.738), and inter-reader agreement among physicians also improved with CAD-assisted interpretation (Fleiss' kappa coefficient, 0.209 to 0.322).
In conclusion, the radiologist-level performance of the CAD in identifying COVID-19 and associated pneumonia on CXR, and the enhanced performance of non-radiologist physicians with CAD assistance, suggest that the CAD can support physicians in interpreting CXRs and help with image-based triage of COVID-19 patients in resource-constrained environments.
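The AUCs reported above can be computed without tracing a full ROC curve, via the Mann-Whitney U statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A small sketch with made-up labels and scores (not the study's data):

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    """AUC as the fraction of positive/negative pairs in which the
    positive case receives the higher score; ties count half."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]                # hypothetical RT-PCR ground truth
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]    # hypothetical CAD probabilities
auc = auc_mann_whitney(labels, scores)     # 8 of 9 pairs ranked correctly
```

This rank-based form makes clear why AUC is insensitive to the operating threshold, which matters when comparing a CAD against readers who each use their own implicit threshold.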


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Lung , Radiographic Image Interpretation, Computer-Assisted , Aged , Female , Humans , Lung/diagnostic imaging , Lung/pathology , Male , Middle Aged , Radiography, Thoracic , Republic of Korea/epidemiology , Retrospective Studies , Tomography, X-Ray Computed
20.
Diagn Interv Imaging ; 102(5): 305-312, 2021 May.
Article in English | MEDLINE | ID: covidwho-1237673

ABSTRACT

PURPOSE: The purpose of this study was to characterize the technical capabilities and feasibility of a large field-of-view clinical spectral photon-counting computed tomography (SPCCT) prototype for high-resolution (HR) lung imaging. MATERIALS AND METHODS: Measurement of the modulation transfer function (MTF) and acquisition of a line-pairs phantom were performed. An anthropomorphic lung nodule phantom was scanned at standard (120 kVp, 62 mAs), low (120 kVp, 11 mAs), and ultra-low (80 kVp, 3 mAs) radiation doses. A human volunteer underwent standard (120 kVp, 63 mAs) and low-dose (120 kVp, 11 mAs) scans after approval by the ethics committee. HR images were reconstructed with a 1024 matrix, 300 mm field of view, and 0.25 mm slice thickness using filtered back-projection (FBP) and two levels of iterative reconstruction (iDose 5 and 9). The conspicuity and sharpness of various lung structures (distal airways, vessels, fissures, and proximal bronchial wall), image noise, and overall image quality were independently analyzed by three radiologists and compared with a previous HR lung CT examination of the same volunteer performed on a conventional CT scanner equipped with energy-integrating detectors (120 kVp, 10 mAs, FBP). RESULTS: Ten percent MTF was measured at 22.3 lp/cm, with a cut-off at 31 lp/cm; up to 28 lp/cm were depicted. While mixed and solid nodules were easily depicted on standard- and low-dose phantom images, higher iDose levels and slice thicknesses (1 mm) were needed to visualize ground-glass components on ultra-low-dose images. Standard-dose SPCCT images of in vivo lung structures showed greater conspicuity and sharpness, greater overall image quality, and similar image noise (despite a flux reduction of 23%) compared with conventional CT images. Low-dose SPCCT images showed greater or similar conspicuity and sharpness, similar overall image quality, and lower but acceptable image noise (despite a flux reduction of 89%).
CONCLUSIONS: A large field-of-view SPCCT prototype demonstrated HR technical capabilities and high image quality for high-resolution lung CT in humans.
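MTF figures like those quoted above are typically derived from a measured line-spread function (LSF): the MTF is the normalised magnitude of its Fourier transform. A minimal numpy sketch (the LSF profiles and pixel pitch here are hypothetical, not the prototype's data):

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_cm):
    """MTF as the normalised magnitude spectrum of a 1-D line-spread
    function; with the pitch in cm, frequencies come out in lp/cm."""
    lsf = np.asarray(lsf, dtype=float)
    spectrum = np.abs(np.fft.rfft(lsf))
    mtf = spectrum / spectrum[0]                       # normalise to DC = 1
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_cm)
    return freqs, mtf

# An ideal (delta-function) LSF transfers every frequency perfectly:
impulse = np.zeros(64)
impulse[32] = 1.0
freqs, mtf = mtf_from_lsf(impulse, pixel_pitch_cm=0.025)
```

A real detector's LSF is blurred, so its MTF falls with frequency; the "10% MTF" figure is simply the frequency at which this curve drops to 0.1.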


Subject(s)
Lung , Tomography, X-Ray Computed , Algorithms , Feasibility Studies , Humans , Image Processing, Computer-Assisted , Lung/diagnostic imaging , Phantoms, Imaging , Radiation Dosage , Radiographic Image Interpretation, Computer-Assisted